Master-worker model for MapReduce paradigm on the TILE64 many-core platform
Authors
Abstract
MapReduce is a popular programming paradigm for processing big data. It uses the master–worker model, which is widely used on distributed and loosely coupled systems such as clusters, to solve large problems with task parallelism. With the ubiquity of many-core architectures in recent years and the foreseeable future, the many-core platform will be one of the main computing platforms for executing MapReduce programs. Therefore, it is essential to optimize MapReduce programs on many-core platforms. Optimizing parallel programs for a many-core platform is a multifaceted problem, where both system and architectural factors should be taken into account. In this paper, we look into the problem by constructing a master–worker model for the MapReduce paradigm on the TILE64 many-core platform. We investigate master share and worker share schemes for the implementation of a MapReduce library on the TILE64. The theoretical analysis shows that the worker share scheme is inherently better for the implementation of a MapReduce library on the TILE64 many-core platform. © 2013 Elsevier B.V. All rights reserved.
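The abstract contrasts master share and worker share schemes for placing MapReduce work on the tiles, but does not show the paper's TILE64 library itself. As a rough, illustrative sketch of the underlying master–worker structure only (plain POSIX threads in C rather than the TILE64 iLib/TMC APIs; all names, sizes, and the shared-counter task queue are assumptions made here for illustration), a master thread can spawn workers, hand out map tasks, and reduce the per-worker partial results:

/* Illustrative master-worker MapReduce skeleton (not the paper's TILE64 code).
 * The master spawns workers, hands out map tasks via a shared counter,
 * and reduces one partial result per worker.                               */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16
#define CHUNK       1024

static int  input[NUM_TASKS * CHUNK];      /* input records                 */
static long partial[NUM_WORKERS];          /* one reduce slot per worker    */
static int  next_task = 0;                 /* master-managed task counter   */
static pthread_mutex_t task_lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker: repeatedly fetch a map task and accumulate a local partial sum. */
static void *worker(void *arg)
{
    int id = (int)(long)arg;
    for (;;) {
        pthread_mutex_lock(&task_lock);
        int task = (next_task < NUM_TASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&task_lock);
        if (task < 0)
            break;                         /* no more map tasks             */
        long local = 0;
        for (int i = 0; i < CHUNK; i++)    /* map phase over one chunk      */
            local += input[task * CHUNK + i];
        partial[id] += local;              /* combine into this worker slot */
    }
    return NULL;
}

int main(void)
{
    for (int i = 0; i < NUM_TASKS * CHUNK; i++)
        input[i] = 1;                      /* dummy data: expected sum 16384 */

    pthread_t tid[NUM_WORKERS];
    for (long w = 0; w < NUM_WORKERS; w++) /* master spawns the workers      */
        pthread_create(&tid[w], NULL, worker, (void *)w);

    long total = 0;
    for (int w = 0; w < NUM_WORKERS; w++) {/* master reduces partial results */
        pthread_join(tid[w], NULL);
        total += partial[w];
    }
    printf("reduced result = %ld\n", total);
    return 0;
}

In such a skeleton, the master share versus worker share question studied in the paper amounts to deciding how much of the task-distribution and reduction work is kept on the master tile versus pushed onto the worker tiles; the sketch above keeps all of it on the master.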
Similar resources
Parallelization of Motion JPEG Decoder on TILE64 Many-Core Platform
The ubiquity of many-core architectures challenges software developers to write scalable software. To parallelize data-intensive applications on a many-core platform, one has to consider both the hardware architecture and the software characteristics when writing parallel code. In this paper, we take the Motion JPEG decoder as an example data-intensive application and take TILE64 as an example man...
Performance model for Master/Worker hybrid applications
Master/worker is a commonly used parallel/distributed programming paradigm, and many applications are developed following it. The paradigm can be easily implemented using message passing libraries (MPI); moreover, the multicore features of current nodes can be exploited at the node level by applying thread parallelism (OpenMP). In this way Master/Worker applications are ...
Towards dynamic adaptability support for the master-worker paradigm in component based applications
When executing scientific applications, resources that may be used can vary from multi-core processors to grids. Therefore, abstracting the programming model enables portability on various resource infrastructures. Furthermore, software component technology appears to be a very promising approach to deal with the growing complexity of scientific applications. Hence, we proposed a model to impro...
Addressing Big Data with Hadoop
Nowadays, a large volume of data is produced from various sources such as social media networks, sensory devices, and other information-serving devices. This large collection of unstructured and semi-structured data is called big data. Conventional databases and data warehouses cannot process this data, so new data-processing tools are needed. Hadoop addresses this need. Hadoop is an open sourc...
Using Pattern Classification for Task Assignment in MapReduce
MapReduce has become a popular paradigm for large scale data processing in the cloud. The sheer scale of MapReduce deployments makes task assignment in MapReduce an interesting problem. The scale of MapReduce applications presents a unique opportunity to use data-driven algorithms in resource management. We present a learning based scheduler that uses pattern classification for utilization oriente...
Journal title: Future Generation Comp. Syst.
Volume 36, Issue: -
Pages: -
Publication date: 2014